
    Spherical Image Processing for Accurate Visual Odometry with Omnidirectional Cameras

    Due to their omnidirectional view, catadioptric cameras are of great interest for robot localization and visual servoing. For simplicity, most vision-based algorithms use image processing tools (e.g., image smoothing) designed for perspective cameras. This can be a good approximation when the camera displacement is small with respect to the distance to the observed environment. Otherwise, perspective image processing tools cannot accurately handle the signal distortion induced by the specific geometry of omnidirectional cameras. In this paper, we propose an appropriate spherical image processing scheme for increasing the accuracy of visual odometry estimation. The omnidirectional images are mapped onto a unit sphere and treated in the spherical spectral domain. This spherical image processing takes into account the specific geometry of omnidirectional cameras; for example, we can design a more accurate and more repeatable Harris interest point detector. The interest points can then be matched between two images with a large baseline in order to accurately estimate the camera motion. A real experiment demonstrates the accuracy of the visual odometry obtained with the spherical image processing and the improvement over standard perspective image processing.
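    A minimal Python sketch of the lifting step described above, assuming the unified central projection model for catadioptric cameras; the intrinsics (fx, fy, cx, cy) and mirror parameter xi are hypothetical placeholders, not values from the paper.

        # Lift omnidirectional image pixels onto the unit sphere (unified sphere model).
        import numpy as np

        def lift_to_unit_sphere(u, v, fx=300.0, fy=300.0, cx=320.0, cy=240.0, xi=0.8):
            """Back-project pixel coordinates (u, v) onto the unit sphere S^2."""
            x = (u - cx) / fx                      # normalized image plane coordinates
            y = (v - cy) / fy
            r2 = x * x + y * y
            # Closed-form inverse of the unified (sphere) camera model
            factor = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (1.0 + r2)
            P = np.stack([factor * x, factor * y, factor - xi], axis=-1)
            return P / np.linalg.norm(P, axis=-1, keepdims=True)

        # Example: lift the whole pixel grid of a 640x480 omnidirectional image
        uu, vv = np.meshgrid(np.arange(640), np.arange(480))
        sphere_points = lift_to_unit_sphere(uu.astype(float), vv.astype(float))

    Spherical operators such as the Harris detector mentioned in the abstract would then work on these sphere points rather than on the raw pixel grid.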

    Visual servoing based mobile robot navigation able to deal with complete target loss

    This paper combines reactive collision avoidance methods with image-based visual servoing control for mobile robot navigation in an indoor environment. The proposed strategy allows the mobile robot to reach a desired position, described by a natural visual target, among unknown obstacles. While the robot avoids the obstacles, the camera can lose its target, which makes visual servoing fail. We propose a strategy to deal with the loss of visual features by taking advantage of odometric sensing. Obstacles are detected by a laser range finder and their boundaries are modeled using B-spline curves. We validate our strategy in a real experiment of indoor mobile robot navigation in the presence of obstacles.
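    As an illustration of the obstacle modeling step, the following sketch fits a B-spline to laser range finder hits with SciPy; it is only a plausible reconstruction, not the paper's implementation, and the smoothing value is an arbitrary placeholder.

        # Model an obstacle boundary detected by a laser range finder with a B-spline curve.
        import numpy as np
        from scipy.interpolate import splprep, splev

        def fit_obstacle_boundary(ranges, angles, smoothing=0.01):
            """Fit a smooth B-spline to a cluster of laser hits (polar -> Cartesian)."""
            x = ranges * np.cos(angles)
            y = ranges * np.sin(angles)
            tck, _ = splprep([x, y], s=smoothing)   # cubic B-spline by default
            bx, by = splev(np.linspace(0.0, 1.0, 100), tck)
            return np.column_stack([bx, by])        # densely sampled boundary

        # Example with synthetic scan points forming an arc-shaped obstacle
        angles = np.linspace(-0.5, 0.5, 20)
        ranges = 2.0 + 0.05 * np.random.randn(20)
        boundary = fit_obstacle_boundary(ranges, angles)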

    3D Human Pose Estimation with a Catadioptric Sensor in Unconstrained Environments Using an Annealed Particle Filter

    The purpose of this paper is to investigate the problem of 3D human tracking in complex environments using a particle filter with images captured by a catadioptric vision system. This issue has been widely studied in the literature for RGB images acquired from conventional perspective cameras, whereas omnidirectional images have seldom been used and published research in this field remains limited. In this study, Riemannian manifolds are considered in order to compute the gradient on spherical images and generate a robust descriptor used along with an SVM classifier for human detection. Original likelihood functions associated with the particle filter are proposed, using both geodesic distances and overlapping regions between the silhouette detected in the images and the projected 3D human model. Our approach was experimentally evaluated on real data and showed favorable results compared to machine-learning-based techniques in terms of 3D pose accuracy. The Root Mean Square Error (RMSE) was measured by comparing estimated 3D poses with ground-truth data, resulting in a mean error of 0.065 m for the walking action.
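    To make the filtering step concrete, here is a minimal sketch of one annealed particle filter update; the likelihood() callable, the pose dimensionality and the annealing schedule are illustrative assumptions, not the authors' settings (their likelihood uses geodesic distances and silhouette overlap).

        # One annealing run: re-weight, resample and diffuse particles over several layers.
        import numpy as np

        def annealed_pf_step(particles, likelihood, n_layers=5, noise=0.05):
            n = len(particles)
            for beta in np.linspace(0.2, 1.0, n_layers):      # soft -> sharp likelihood
                w = np.array([likelihood(p) ** beta for p in particles])
                w = w / w.sum()
                idx = np.random.choice(n, size=n, p=w)        # importance resampling
                particles = particles[idx] + noise * np.random.randn(*particles.shape)
                noise *= 0.7                                  # shrink diffusion per layer
            return particles

        # Example: 200 particles over a 30-D articulated pose, with a dummy likelihood
        rng = np.random.default_rng(0)
        particles = rng.normal(size=(200, 30))
        estimate = annealed_pf_step(particles, lambda p: np.exp(-np.linalg.norm(p))).mean(axis=0)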

    Image-Based Lateral Position, Steering Behavior Estimation, and Road Curvature Prediction for Motorcycles

    This letter presents an image-based approach to simultaneously estimate the lateral position of a powered two-wheeled vehicle on the road and its steering behavior, and to predict the road curvature ahead of the motorcycle. The approach is based on the inverse perspective mapping technique combined with a road lane detection algorithm capable of detecting straight and curved lanes. A clothoid model is then used to extract pertinent information from the detected road markers. Finally, the performance of the proposed approach is illustrated through simulations carried out with the well-known motorcycle simulator BikeSim. The results are very promising since the algorithm is capable of estimating, in real time, the road geometry and the vehicle location with better accuracy than a commercial GPS.
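    The clothoid step lends itself to a short sketch: the road marker is commonly approximated by y(x) = y0 + theta*x + 0.5*c0*x^2 + (1/6)*c1*x^3, whose coefficients give the lateral offset, heading, curvature and curvature rate. The code below is an assumed illustration of that fit, not the letter's implementation.

        # Least-squares fit of the third-order clothoid approximation to lane-marker points.
        import numpy as np

        def fit_clothoid(xs, ys):
            A = np.column_stack([np.ones_like(xs), xs, 0.5 * xs**2, xs**3 / 6.0])
            (y0, theta, c0, c1), *_ = np.linalg.lstsq(A, ys, rcond=None)
            return y0, theta, c0, c1    # offset, heading, curvature, curvature rate

        # Example: noisy samples of a gently curving lane marker (bird's-eye-view metres)
        xs = np.linspace(0.0, 30.0, 60)
        ys = 1.5 + 0.02 * xs + 0.5 * 0.003 * xs**2 + np.random.normal(0.0, 0.02, xs.size)
        y0, theta, c0, c1 = fit_clothoid(xs, ys)
        print(f"offset={y0:.2f} m, heading={theta:.3f} rad, curvature={c0:.4f} 1/m")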

    Inverse Perspective Mapping Roll Angle Estimation for Motorcycles

    This paper presents an image-based approach to estimate the motorcycle roll angle. The algorithm directly estimates the absolute roll angle with respect to the road plane by means of a basic monocular camera, which means that the estimated roll angle is not affected by the road bank angle, often a problem for vehicle observation and control purposes. For each captured image, the algorithm runs a numeric roll loop based on simple knowledge of the road geometry. At each iteration, a bird's-eye view of the road is generated with the inverse perspective mapping technique. Then, a road marker filter and the well-known clothoid model are used respectively to track the road separation lanes and to approximate them with mathematical functions. Finally, the algorithm computes two distinct areas between the two road separation lanes. Its performance is tested on the motorcycle simulator BikeSim. This approach is very promising since it does not require any vehicle or tire model and is free of restrictive assumptions on the dynamics.
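    The numeric roll loop can be outlined as a simple search; the helpers ipm_warp() and lane_areas() are hypothetical, and the stopping criterion used here (keeping the roll candidate that balances the two areas between the lane markers) is an assumption made for illustration rather than the paper's exact criterion.

        # Scan candidate roll angles; keep the one whose bird's-eye view balances the two areas.
        import numpy as np

        def estimate_roll(image, ipm_warp, lane_areas,
                          roll_grid=np.deg2rad(np.arange(-45.0, 45.0, 0.5))):
            best_roll, best_err = 0.0, np.inf
            for roll in roll_grid:
                bev = ipm_warp(image, roll)          # bird's-eye view for this candidate roll
                a_left, a_right = lane_areas(bev)    # areas between the road separation lanes
                err = abs(a_left - a_right)
                if err < best_err:
                    best_roll, best_err = roll, err
            return best_roll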

    Powered Two-Wheeled Vehicles Steering Behavior Study: Vision-Based Approach

    This paper presents a vision-based approach to prevent dangerous steering situations when riding a motorcycle in a turn. In other words, the proposed algorithm is capable of detecting under-, neutral- or over-steering behavior using only a conventional camera and an inertial measurement unit. The inverse perspective mapping technique is used to reconstruct a bird's-eye view of the road image. Then, filters are applied to keep only the road markers, which are afterwards approximated with the well-known clothoid model. This allows the road geometry, such as the curvature ahead of the motorcycle, to be predicted. Finally, from the predicted road curvature, the measured Euler angles and the vehicle speed, the proposed algorithm characterizes the steering behavior. To that end, we propose to estimate the steering ratio and we introduce new pertinent indicators such as the dynamics of the vehicle's position relative to the road. The method is validated on the advanced simulator BikeSim during a steady turn.
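    As a purely illustrative companion to the steering-ratio idea, the sketch below compares the measured yaw rate with the yaw rate implied by the predicted curvature and speed; the ratio definition and the threshold are placeholders, not the paper's indicators.

        # Classify under-, neutral- or over-steering from a simple steering ratio.
        def steering_behaviour(yaw_rate, speed, road_curvature, tol=0.1):
            expected_yaw_rate = speed * road_curvature     # yaw rate needed to follow the lane
            if abs(expected_yaw_rate) < 1e-6:
                return "neutral"                           # straight road: nothing to compare
            ratio = yaw_rate / expected_yaw_rate
            if ratio < 1.0 - tol:
                return "under-steering"
            if ratio > 1.0 + tol:
                return "over-steering"
            return "neutral"

        print(steering_behaviour(yaw_rate=0.15, speed=20.0, road_curvature=0.01))  # under-steering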

    Efficient Industrial Solution for Robotic Task Sequencing Problem With Mutual Collision Avoidance & Cycle Time Optimization

    In the automotive industry, several robots are required to simultaneously carry out welding sequences on the same vehicle. Coordinating and assigning welding points between robots is a manual and difficult phase that needs to be optimized using automatic tools. The cycle time of the cell strongly depends on different robotic factors such as the task allocation among the robots, the configuration solutions and obstacle avoidance. Moreover, a key aspect, often neglected in the state of the art, is to define a strategy that solves the robotic task sequencing with an effective robot-robot collision avoidance integration. In this paper, we present an efficient iterative algorithm that generates a high-quality solution for the Multi-Robotic Task Sequencing Problem. The latter manages not only the mentioned robotic factors but also aspects related to accessibility constraints and mutual collision avoidance. In addition, a home-developed planner (RoboTSPlanner) handling 6-axis robots has been validated in a real-case scenario. In order to ensure the completeness of the proposed methodology, we perform an optimization in the task, configuration and coordination spaces in a synergistic way. Compared to existing approaches, both simulation and real experiments reveal positive results in terms of cycle time and show the ability of this method to be interfaced with both industrial simulation software and ROS-I tools.
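    For intuition only, here is a greedy baseline for the allocation and sequencing part of the problem; it is not RoboTSPlanner, and the mutual collision avoidance, accessibility constraints and configuration choices that the paper optimizes would have to be layered on top of such a skeleton.

        # Assign each welding point to the closest robot base, then order each robot's points
        # with a nearest-neighbour heuristic.
        import numpy as np

        def sequence_tasks(points, robot_bases):
            points = np.asarray(points, dtype=float)
            bases = np.asarray(robot_bases, dtype=float)
            dists = np.linalg.norm(points[:, None, :] - bases[None, :, :], axis=2)
            assignment = np.argmin(dists, axis=1)
            sequences = []
            for r, base in enumerate(bases):
                todo = list(np.where(assignment == r)[0])
                current, order = base, []
                while todo:
                    nxt = min(todo, key=lambda i: np.linalg.norm(points[i] - current))
                    order.append(int(nxt))
                    current = points[nxt]
                    todo.remove(nxt)
                sequences.append(order)
            return sequences

        # Example: six welding points split between two robot bases
        print(sequence_tasks([[0, 0, 1], [1, 0, 1], [4, 0, 1], [5, 1, 1], [0, 2, 1], [5, 0, 1]],
                             robot_bases=[[0, 0, 0], [5, 0, 0]]))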

    SWIR Camera-Based Localization and Mapping in Challenging Environments

    This paper assesses a monocular localization system for complex scenes. The system is carried by a moving agent in a complex environment (smoke, darkness, indoor-outdoor transitions). We show how using a short-wave infrared (SWIR) camera with a potential lighting source is a good compromise that requires only a slight adaptation of classical simultaneous localization and mapping (SLAM) techniques. This choice makes it possible to obtain relevant features from SWIR images and to limit tracking failures due to the lack of key points in such challenging environments. In addition, we propose a tracking failure recovery strategy that allows tracking re-initialization with or without the use of other sensors. Our localization system is validated using real datasets generated from a moving SWIR camera in an indoor environment. The obtained results are promising and lead us to consider the integration of our mono-SLAM in a complete localization chain including a data fusion process from several sensors.
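    The failure-recovery idea can be outlined in a few lines; the slam object and its track(), relocalize() and reinitialize() methods are hypothetical names used only to show the order in which the strategies are tried.

        # Nominal tracking first, then keyframe relocalization, then re-initialization that can
        # optionally be seeded by another sensor (e.g. an odometry prior).
        def process_frame(frame, slam, odometry_prior=None):
            pose = slam.track(frame)
            if pose is not None:
                return pose
            pose = slam.relocalize(frame)
            if pose is not None:
                return pose
            return slam.reinitialize(frame, prior=odometry_prior)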

    Real-Time Multi-SLAM System for Agent Localization and 3D Mapping in Dynamic Scenarios

    This paper introduces a wearable SLAM system that performs indoor and outdoor SLAM in real time. The related project is part of the MALIN challenge, which aims at creating a system to track emergency response agents in complex scenarios (such as dark environments, smoke-filled rooms, repetitive patterns, building floor transitions and doorway crossings) where GPS technology is insufficient or inoperative. The proposed system fuses different SLAM technologies to compensate for the lack of robustness of each one, while estimating the pose individually. LiDAR and visual SLAM are fused with an inertial sensor in such a way that the system is able to maintain GPS coordinates that are sent via radio to a ground station for real-time tracking. More specifically, LiDAR and monocular vision technologies are tested in dynamic scenarios where the main advantages of each have been evaluated and compared. Finally, 3D reconstruction up to three levels of detail is performed.
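    A minimal sketch of one way such estimates can be combined (inverse-covariance weighting of the LiDAR and visual position estimates); this is an illustrative assumption, not the project's actual fusion filter, and the covariances in the example are synthetic.

        # Information-form fusion of two 3D position estimates.
        import numpy as np

        def fuse_positions(p_lidar, cov_lidar, p_visual, cov_visual):
            info_l = np.linalg.inv(cov_lidar)
            info_v = np.linalg.inv(cov_visual)
            cov_fused = np.linalg.inv(info_l + info_v)
            p_fused = cov_fused @ (info_l @ p_lidar + info_v @ p_visual)
            return p_fused, cov_fused

        # Example: visual SLAM is confident, LiDAR less so at this (synthetic) instant
        p, _ = fuse_positions(np.array([1.0, 2.0, 0.5]), 0.20 * np.eye(3),
                              np.array([1.1, 2.1, 0.4]), 0.05 * np.eye(3))
        print(p)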